The Postgres-XL experiments described here were run with KVM on my own laptop under RHEL 7.2 x86_64, using six virtual machines in total:
one openfiler 2.99 VM publishing shared storage, one GTM master, one GTM slave, and three combined gtm_proxy/coordinator/datanode nodes. Apart from the openfiler VM, the other five were installed from a minimal RHEL 7.2 x86_64 image, and each has two NICs: one on 192.168.122.* for service traffic and one on 10.10.10.* for accessing the shared storage published by openfiler. The Postgres-XL service plan is as follows.
Service name | Role | IP | Service port | Service directory | Pooler port |
---|---|---|---|---|---|
gtm_mast | gtm master | 192.168.122.179 | 20001 | /pgdata/gtm/data | |
gtm_slav | gtm slave | 192.168.122.189 | 20001 | /pgdata/gtm/data | |
gtm_pxy01 | gtm proxy | 192.168.122.171 | 20001 | /pgdata/gtm_pxy01/data | |
gtm_pxy02 | gtm proxy | 192.168.122.172 | 20001 | /pgdata/gtm_pxy02/data | |
gtm_pxy03 | gtm proxy | 192.168.122.173 | 20001 | /pgdata/gtm_pxy03/data | |
coord01 | coordinator | 192.168.122.171 | 15432 | /pgdata/coord01/data | 40101 |
coord02 | coordinator | 192.168.122.172 | 15432 | /pgdata/coord02/data | 40102 |
coord03 | coordinator | 192.168.122.173 | 15432 | /pgdata/coord03/data | 40103 |
datan01 | datanode | 192.168.122.181 | 25431 | /pgdata/datan01/data | 40401 |
datan02 | datanode | 192.168.122.182 | 25432 | /pgdata/datan02/data | 40402 |
datan03 | datanode | 192.168.122.183 | 25433 | /pgdata/datan03/data | 40403 |
I. Virtual machine OS configuration
1. Hostname configuration
Set the hostname on each VM:
hostnamectl set-hostname rhel7pg171
Turn /etc/hosts into a well-formed file (three columns if a domain name is included, two columns otherwise).
As the /etc/hosts file below shows, each VM's hostname is rhel7pgxxx.
# cat /etc/hosts
127.0.0.1 localhost
192.168.122.1 station90
192.168.122.170 rhel7pg170
192.168.122.171 rhel7pg171
192.168.122.172 rhel7pg172
192.168.122.173 rhel7pg173
192.168.122.179 rhel7pg179
192.168.122.189 rhel7pg189
192.168.122.100 openfiler100
192.168.122.181 datan01
192.168.122.182 datan02
192.168.122.183 datan03
2. Security settings
Disable SELinux and the firewall on every VM:
setenforce 0
sed -i.bak "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
systemctl disable firewalld.service
systemctl stop firewalld.service
iptables --flush
3. Local yum repository configuration
Mount the OS ISO in the local cdrom onto a local directory:
mkdir -p /mnt/iso
mount /dev/cdrom /mnt/iso
# Add the entry to fstab so it is mounted automatically at the next reboot
echo "/dev/cdrom /mnt/iso iso9660 defaults 0 0" >> /etc/fstab
Create the local yum repository definition:
vi /etc/yum.repos.d/base.repo
cat /etc/yum.repos.d/base.repo
[rhel7]
name=rhel7
baseurl=file:///mnt/iso
gpgcheck=0
[rhel7-HA]
name=rhel7-HA
baseurl=file:///mnt/iso/addons/HighAvailability
gpgcheck=0
[rhel7-RS]
name=rhel7-RS
baseurl=file:///mnt/iso/addons/ResilientStorage
gpgcheck=0
Refresh the yum metadata:
yum clean all
yum list
yum group list
4. Time synchronization
Install the chrony package:
yum install chrony.x86_64 -y
Edit the configuration file: comment out the default servers and add a known-good or self-hosted time server. In this experiment the openfiler VM at 192.168.122.100 publishes an NTP time source.
vi /etc/chrony.conf
# server 0.rhel.pool.ntp.org iburst
# server 1.rhel.pool.ntp.org iburst
# server 2.rhel.pool.ntp.org iburst
# server 3.rhel.pool.ntp.org iburst
server 192.168.122.100 iburst
Restart the time synchronization service:
systemctl restart chronyd.service
Check the service status:
systemctl status chronyd.service
Enable the service at boot:
systemctl enable chronyd.service
List the time sources:
chronyc sources -v
Show the source statistics:
chronyc sourcestats -v
5. Reboot
Reboot all VMs so that the hostname and SELinux changes take effect:
init 6
II. Postgres-XL software installation
1. Dependency packages
Build and runtime dependencies:
yum install -y make mpfr libmpc cpp kernel-headers glibc-headers glibc-devel libgomp libstdc++-devel libquadmath libgfortran libgnat libgnat-devel libobjc gcc gcc-c++ libquadmath-devel gcc-gfortran gcc-gnat gcc-objc gcc-objc++ ncurses-devel readline readline-devel zlib-devel m4 flex bison mailcap
Perl support:
yum install -y perl \
perl-Carp \
perl-constant \
perl-Encode \
perl-Exporter \
perl-File-Path \
perl-File-Temp \
perl-Filter \
perl-Getopt-Long \
perl-HTTP-Tiny \
perl-libs \
perl-macros \
perl-parent \
perl-PathTools \
perl-Pod-Escapes \
perl-podlators \
perl-Pod-Perldoc \
perl-Pod-Simple \
perl-Pod-Usage \
perl-Scalar-List-Utils \
perl-Socket \
perl-Storable \
perl-Text-ParseWords \
perl-threads \
perl-threads-shared \
perl-Time-HiRes \
perl-Time-Local
2. Installing the Postgres-XL core
gunzip postgres-xl-9.5r1.4.tar.gz
tar -xvf postgres-xl-9.5r1.4.tar
cd postgres-xl-9.5r1.4
./configure
gmake
gmake install
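By default ./configure installs under /usr/local/pgsql, which is the PGHOME used later in this guide. As an optional sanity check (the exact version strings depend on your build):
/usr/local/pgsql/bin/pg_config --version
/usr/local/pgsql/bin/initdb --version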
3. Installing the pgxc_ctl contrib module
If you want to use the configuration-file backup option (configBackup=y), the pgxc_ctl source needs a small patch first. In postgres-xl-9.5r1.4/contrib/pgxc_ctl, inside static void init_all(void) in do_command.c, a call to doConfigBackup(); must be inserted on the second line of the function body, i.e. immediately before the init_gtm_master(true); call at line 524, and the module then rebuilt with make && make install. Without this change, the configBackup=y feature does not work when running pgxc_ctl init all. A sketch of the edited code and a quick check follow the build commands below.
cd postgres-xl-9.5r1.4/contrib/pgxc_ctl
vi do_command.c
make
make install
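A sketch of what the patched function should look like; the exact line number may differ between releases, so locate init_all() yourself rather than relying on line 524:
# After the edit, the top of init_all() in do_command.c should read roughly:
#     static void init_all(void)
#     {
#         doConfigBackup();        /* inserted line */
#         init_gtm_master(true);
#         ...
# Verify the insertion (run from contrib/pgxc_ctl):
grep -n -A 3 'static void init_all' do_command.c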
4. User environment configuration
The environment variables must be written to .bashrc here, because pgxc_ctl runs commands on the remote servers over passwordless SSH; such non-interactive sessions read .bashrc but not .bash_profile. If PATH and the other variables only live in .bash_profile, the later cluster initialization (init all) fails with "command not found" errors. A quick check is sketched after the commands below.
/usr/sbin/groupadd -g 2001 postgres
/usr/sbin/useradd -u 2001 -g postgres postgres
echo "postgres_passwd" | passwd --stdin postgres
echo "export PGHOME=/usr/local/pgsql" >> /home/postgres/.bashrc
echo "export LD_LIBRARY_PATH==$PGHOME/lib" >> /home/postgres/.bashrc
echo 'export PG_CONFIG=$PGHOME/bin/pg_config' >> /home/postgres/.bashrc
echo 'export pg_config=$PGHOME/bin/pg_config' >> /home/postgres/.bashrc
echo 'export PATH=$PATH:$PGHOME/bin' >> /home/postgres/.bashrc
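As an optional check (before SSH trust is configured in section III.1 you will be prompted for the postgres password), confirm that a non-interactive SSH session picks up the variables from .bashrc:
ssh postgres@rhel7pg171 'echo $PGHOME; which initdb'
# Expected: /usr/local/pgsql and /usr/local/pgsql/bin/initdb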
5. Registering the shared library files
source /home/postgres/.bashrc
echo "$PGHOME/lib" >> /etc/ld.so.conf
/sbin/ldconfig
cat /etc/ld.so.conf
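Optionally confirm that the dynamic loader now picks up the Postgres-XL libraries (the grep pattern assumes the default install prefix):
ldconfig -p | grep /usr/local/pgsql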
III. Postgres-XL initialization
1. SSH mutual trust
Configure passwordless SSH between the postgres users on all nodes.
There are many ways to set up SSH mutual trust; see, for example, 《配置SSH互信》 (an article on configuring SSH mutual trust):
http://blog.163.com/cao_jfeng...
I used Oracle's sshUserSetup.sh script:
./sshUserSetup.sh -hosts "rhel7pg171 rhel7pg172 rhel7pg173 rhel7pg179 rhel7pg189" -user postgres -advanced -noPromptPassphrase -exverify
Test that SSH works in every direction.
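For example (run as postgres on each node, shown here from gtm_mast; no password prompt should appear):
for h in rhel7pg171 rhel7pg172 rhel7pg173 rhel7pg179 rhel7pg189; do
  ssh $h "hostname; date"
done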
2. Creating the PGDATA directories
On the three gtm_proxy/coordinator/datanode hosts:
mkdir -p /pgdata/datan01
mkdir -p /pgdata/datan02
mkdir -p /pgdata/datan03
mkdir -p /pgdata/coord01
mkdir -p /pgdata/coord02
mkdir -p /pgdata/coord03
mkdir -p /pgdata/gtm_pxy01
mkdir -p /pgdata/gtm_pxy02
mkdir -p /pgdata/gtm_pxy03
On the gtm master and gtm slave:
mkdir -p /pgdata/gtm
On all nodes:
chown -R postgres:postgres /pgdata
3. Special preparation of the datanodes
Use openfiler to present three 3 GB shared disks to datan01, datan02 and datan03.
Run on datan01, datan02 and datan03:
systemctl enable iscsi
iscsiadm -m discovery -t sendtargets -p 10.10.10.100
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.bf1f466b7eef -p 10.10.10.100 -l
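The three 3 GB LUNs should now appear as new block devices. On this setup they are assumed to show up as /dev/sda, /dev/sdb and /dev/sdc (on KVM the local system disk is usually /dev/vda), which is what the fdisk commands below expect; verify before partitioning:
iscsiadm -m session -P 1
lsblk -d -o NAME,SIZE,TYPE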
On one of the datanodes, partition each shared disk (a single partition is enough) and format it:
fdisk /dev/sda
fdisk /dev/sdb
fdisk /dev/sdc
partprobe /dev/sda
partprobe /dev/sdb
partprobe /dev/sdc
mkfs.xfs /dev/sda1
mkfs.xfs /dev/sdb1
mkfs.xfs /dev/sdc1
# Note: tune2fs applies only to ext2/3/4 filesystems; it is neither needed nor functional on the XFS filesystems created above.
Reboot all datanodes so they re-detect the disks.
Test mounting:
mount /dev/sda1 /pgdata/datan01/
mount /dev/sdb1 /pgdata/datan02/
mount /dev/sdc1 /pgdata/datan03/
umount /dev/sda1
umount /dev/sdb1
umount /dev/sdc1
Configure each datanode's temporary (alias) IP and mount its filesystem; a sample ifcfg-eth0:1 is sketched after the commands below:
cd /etc/sysconfig/network-scripts/
cp -rp ifcfg-eth0 ifcfg-eth0:1
vi ifcfg-eth0:1
systemctl restart network
mount /dev/sdc1 /pgdata/datan03/
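A sketch of the alias-interface file for the host carrying datan01 (hypothetical values; on the other hosts change the address to 192.168.122.182 or 192.168.122.183, and remove any UUID/HWADDR lines copied over from ifcfg-eth0):
# /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.122.181
NETMASK=255.255.255.0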
On all nodes:
chown -R postgres:postgres /pgdata
4. Writing the pgxc_ctl configuration file
On gtm_mast:
su - postgres
pgxc_ctl
PGXC prepare
PGXC q
cd pgxc_ctl
vi pgxc_ctl.conf
The finished configuration file looks like this:
#!/usr/bin/env bash
#
# Postgres-XC Configuration file for pgxc_ctl utility.
#
# Configuration file can be specified as -c option from pgxc_ctl command. Default is
# $PGXC_CTL_HOME/pgxc_ctl.org.
#
# This is bash script so you can make any addition for your convenience to configure
# your Postgres-XC cluster.
#
# Please understand that pgxc_ctl provides only a subset of configuration which pgxc_ctl
# provide. Here's several several assumptions/restrictions pgxc_ctl depends on.
#
# 1) All the resources of pgxc nodes has to be owned by the same user. Same user means
# user with the same user name. User ID may be different from server to server.
# This must be specified as a variable $pgxcOwner.
#
# 2) All the servers must be reacheable via ssh without password. It is highly recommended
# to setup key-based authentication among all the servers.
#
# 3) All the databases in coordinator/datanode has at least one same superuser. Pgxc_ctl
# uses this user to connect to coordinators and datanodes. Again, no password should
# be used to connect. You have many options to do this, pg_hba.conf, pg_ident.conf and
# others. Pgxc_ctl provides a way to configure pg_hba.conf but not pg_ident.conf. This
# will be implemented in the later releases.
#
# 4) Gtm master and slave can have different port to listen, while coordinator and datanode
# slave should be assigned the same port number as master.
#
# 5) Port nuber of a coordinator slave must be the same as its master.
#
# 6) Master and slave are connected using synchronous replication. Asynchronous replication
# have slight (almost none) chance to bring total cluster into inconsistent state.
# This chance is very low and may be negligible. Support of asynchronous replication
# may be supported in the later release.
#
# 7) Each coordinator and datanode can have only one slave each. Cascaded replication and
# multiple slave are not supported in the current pgxc_ctl.
#
# 8) Killing nodes may end up with IPC resource leak, such as semafor and shared memory.
# Only listening port (socket) will be cleaned with clean command.
#
# 9) Backup and restore are not supported in pgxc_ctl at present. This is a big task and
# may need considerable resource.
#
#========================================================================================
#
#
# pgxcInstallDir variable is needed if you invoke "deploy" command from pgxc_ctl utility.
# If don't you don't need this variable.
pgxcInstallDir=/usr/local/pgsql
#---- OVERALL -----------------------------------------------------------------------------
#
pgxcOwner=postgres # owner of the Postgres-XC databaseo cluster. Here, we use this
# both as linus user and database user. This must be
# the super user of each coordinator and datanode.
pgxcUser=$pgxcOwner # OS user of Postgres-XC owner
tmpDir=/tmp # temporary dir used in XC servers
localTmpDir=$tmpDir # temporary dir used here locally
configBackup=y # If you want config file backup, specify y to this value.
configBackupHost=192.168.122.189 # host to backup config file
configBackupDir=/home/postgres/pgxc_ctl # Backup directory
configBackupFile=pgxc_ctl.conf # Backup file name --> Need to synchronize when original changed.
#---- GTM ------------------------------------------------------------------------------------
# GTM is mandatory. You must have at least (and only) one GTM master in your Postgres-XC cluster.
# If GTM crashes and you need to reconfigure it, you can do it by pgxc_update_gtm command to update
# GTM master with others. Of course, we provide pgxc_remove_gtm command to remove it. This command
# will not stop the current GTM. It is up to the operator.
#---- GTM Master -----------------------------------------------
#---- Overall ----
gtmName=gtm_mast
gtmMasterServer=192.168.122.179
gtmMasterPort=20001
gtmMasterDir=/pgdata/gtm/data
#---- Configuration ---
gtmExtraConfig=none # Will be added gtm.conf for both Master and Slave (done at initilization only)
gtmMasterSpecificExtraConfig=none # Will be added to Master's gtm.conf (done at initialization only)
#---- GTM Slave -----------------------------------------------
# Because GTM is a key component to maintain database consistency, you may want to configure GTM slave
# for backup.
#---- Overall ------
gtmSlave=y # Specify y if you configure GTM Slave. Otherwise, GTM slave will not be configured and
# all the following variables will be reset.
gtmSlaveName=gtm_slav
gtmSlaveServer=192.168.122.189 # value none means GTM slave is not available. Give none if you don't configure GTM Slave.
gtmSlavePort=20001 # Not used if you don't configure GTM slave.
gtmSlaveDir=/pgdata/gtm/data # Not used if you don't configure GTM slave.
# Please note that when you have GTM failover, then there will be no slave available until you configure the slave
# again. (pgxc_add_gtm_slave function will handle it)
#---- Configuration ----
gtmSlaveSpecificExtraConfig=none # Will be added to Slave's gtm.conf (done at initialization only)
#---- GTM Proxy -------------------------------------------------------------------------------------------------------
# GTM proxy will be selected based upon which server each component runs on.
# When fails over to the slave, the slave inherits its master's gtm proxy. It should be
# reconfigured based upon the new location.
#
# To do so, slave should be restarted. So pg_ctl promote -> (edit postgresql.conf and recovery.conf) -> pg_ctl restart
#
# You don't have to configure GTM Proxy if you dont' configure GTM slave or you are happy if every component connects
# to GTM Master directly. If you configure GTL slave, you must configure GTM proxy too.
#---- Shortcuts ------
gtmProxyDir=/pgdata/gtm_pxy
#---- Overall -------
gtmProxy=y # Specify y if you conifugre at least one GTM proxy. You may not configure gtm proxies
# only when you dont' configure GTM slaves.
# If you specify this value not to y, the following parameters will be set to default empty values.
# If we find there're no valid Proxy server names (means, every servers are specified
# as none), then gtmProxy value will be set to "n" and all the entries will be set to
# empty values.
gtmProxyNames=(gtm_pxy01 gtm_pxy02 gtm_pxy03) # No used if it is not configured
gtmProxyServers=(192.168.122.171 192.168.122.172 192.168.122.173) # Specify none if you dont' configure it.
gtmProxyPorts=(20001 20001 20001) # Not used if it is not configured.
gtmProxyDirs=($gtmProxyDir'01/data' $gtmProxyDir'02/data' $gtmProxyDir'03/data')# Not used if it is not configured.
#---- Configuration ----
gtmPxyExtraConfig=none # Extra configuration parameter for gtm_proxy. Coordinator section has an example.
gtmPxySpecificExtraConfig=(none none none)
#---- Coordinators ----------------------------------------------------------------------------------------------------
#---- shortcuts ----------
coordMasterDir=/pgdata/coord
##coordSlaveDir=$HOME/pgxc/nodes/coord_slave
##coordArchLogDir=$HOME/pgxc/nodes/coord_archlog
#---- Overall ------------
coordNames=(coord01 coord02 coord03) # Master and slave use the same name
coordPorts=(15432 15432 15432) # Master ports
poolerPorts=(40101 40102 40103) # Master pooler ports
coordPgHbaEntries=(0.0.0.0/0) # Assumes that all the coordinator (master/slave) accepts
# the same connection
# This entry allows only $pgxcOwner to connect.
# If you'd like to setup another connection, you should
# supply these entries through files specified below.
# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust". If you don't want
# such setups, specify the value () to this variable and suplly what you want using coordExtraPgHba
# and/or coordSpecificExtraPgHba variables.
#coordPgHbaEntries=(::1/128) # Same as above but for IPv6 addresses
#---- Master -------------
coordMasterServers=(192.168.122.171 192.168.122.172 192.168.122.173) # none means this master is not available
coordMasterDirs=($coordMasterDir'01/data' $coordMasterDir'02/data' $coordMasterDir'03/data')
coordMaxWALsernder=0 # max_wal_senders: needed to configure slave. If zero value is specified,
# it is expected to supply this parameter explicitly by external files
# specified in the following. If you don't configure slaves, leave this value to zero.
coordMaxWALSenders=(0 0 0)
# max_wal_senders configuration for each coordinator.
#---- Slave -------------
coordSlave=n # Specify y if you configure at least one coordiantor slave. Otherwise, the following
# configuration parameters will be set to empty values.
# If no effective server names are found (that is, every servers are specified as none),
# then coordSlave value will be set to n and all the following values will be set to
# empty values.
##coordSlaveSync=y # Specify to connect with synchronized mode.
##coordSlaveServers=(node07 node08 node09 node06) # none means this slave is not available
##coordSlavePorts=(20004 20005 20004 20005) # Master ports
##coordSlavePoolerPorts=(20010 20011 20010 20011) # Master pooler ports
##coordSlaveDirs=($coordSlaveDir $coordSlaveDir $coordSlaveDir $coordSlaveDir)
##coordArchLogDirs=($coordArchLogDir $coordArchLogDir $coordArchLogDir $coordArchLogDir)
#---- Configuration files---
# Need these when you'd like setup specific non-default configuration
# These files will go to corresponding files for the master.
# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries
# Or you may supply these files manually.
coordExtraConfig=coordExtraConfig # Extra configuration file for coordinators.
# This file will be added to all the coordinators'
# postgresql.conf
# Pleae note that the following sets up minimum parameters which you may want to change.
# You can put your postgresql.conf lines here.
cat > $coordExtraConfig <<EOF
#================================================
# Added to all the coordinator postgresql.conf
# Original: $coordExtraConfig
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 100
EOF
# Additional Configuration file for specific coordinator master.
# You can define each setting by similar means as above.
coordSpecificExtraConfig=(none none none)
coordExtraPgHba=none # Extra entry for pg_hba.conf. This file will be added to all the coordinators' pg_hba.conf
coordSpecificExtraPgHba=(none none none)
#----- Additional Slaves -----
#
# Please note that this section is just a suggestion how we extend the configuration for
# multiple and cascaded replication. They're not used in the current version.
#
##coordAdditionalSlaves=n # Additional slave can be specified as follows: where you
##coordAdditionalSlaveSet=(cad1) # Each specifies set of slaves. This case, two set of slaves are
# configured
##cad1_Sync=n # All the slaves at "cad1" are connected with asynchronous mode.
# If not, specify "y"
# The following lines specifies detailed configuration for each
# slave tag, cad1. You can define cad2 similarly.
##cad1_Servers=(node08 node09 node06 node07) # Hosts
##cad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
##cad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)
##cad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
##cad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)
#---- Datanodes -------------------------------------------------------------------------------------------------------
#---- Shortcuts --------------
datanodeMasterDir=/pgdata/datan
##datanodeSlaveDir=$HOME/pgxc/nodes/dn_slave
##datanodeArchLogDir=$HOME/pgxc/nodes/datanode_archlog
#---- Overall ---------------
primaryDatanode=none # Primary Node.
# At present, xc has a priblem to issue ALTER NODE against the primay node. Until it is fixed, the test will be done
# without this feature.
##primaryDatanode=datanode1 # Primary Node.
datanodeNames=(datan01 datan02 datan03)
datanodePorts=(25431 25432 25433) # Master ports
datanodePoolerPorts=(40401 40402 40403) # Master pooler ports
datanodePgHbaEntries=(0.0.0.0/0) # Assumes that all the coordinator (master/slave) accepts
# the same connection
# This list sets up pg_hba.conf for $pgxcOwner user.
# If you'd like to setup other entries, supply them
# through extra configuration files specified below.
# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust". If you don't want
# such setups, specify the value () to this variable and suplly what you want using datanodeExtraPgHba
# and/or datanodeSpecificExtraPgHba variables.
#datanodePgHbaEntries=(::1/128) # Same as above but for IPv6 addresses
#---- Master ----------------
datanodeMasterServers=(192.168.122.181 192.168.122.182 192.168.122.183) # none means this master is not available.
# This means that there should be the master but is down.
# The cluster is not operational until the master is
# recovered and ready to run.
datanodeMasterDirs=($datanodeMasterDir'01/data' $datanodeMasterDir'02/data' $datanodeMasterDir'03/data')
datanodeMaxWalSender=0 # max_wal_senders: needed to configure slave. If zero value is
# specified, it is expected this parameter is explicitly supplied
# by external configuration files.
# If you don't configure slaves, leave this value zero.
datanodeMaxWALSenders=(0 0 0)
# max_wal_senders configuration for each datanode
#---- Slave -----------------
datanodeSlave=n # Specify y if you configure at least one coordiantor slave. Otherwise, the following
# configuration parameters will be set to empty values.
# If no effective server names are found (that is, every servers are specified as none),
# then datanodeSlave value will be set to n and all the following values will be set to
# empty values.
##datanodeSlaveServers=(node07 node08 node09 node06) # value none means this slave is not available
##datanodeSlavePorts=(20008 20009 20008 20009) # value none means this slave is not available
##datanodeSlavePoolerPorts=(20012 20013 20012 20013) # value none means this slave is not available
##datanodeSlaveSync=y # If datanode slave is connected in synchronized mode
##datanodeSlaveDirs=($datanodeSlaveDir $datanodeSlaveDir $datanodeSlaveDir $datanodeSlaveDir)
##datanodeArchLogDirs=( $datanodeArchLogDir $datanodeArchLogDir $datanodeArchLogDir $datanodeArchLogDir )
# ---- Configuration files ---
# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries here.
# These files will go to corresponding files for the master.
# Or you may supply these files manually.
datanodeExtraConfig=none # Extra configuration file for datanodes. This file will be added to all the
# datanodes' postgresql.conf
datanodeSpecificExtraConfig=(none none none)
datanodeExtraPgHba=none # Extra entry for pg_hba.conf. This file will be added to all the datanodes' postgresql.conf
datanodeSpecificExtraPgHba=(none none none)
#----- Additional Slaves -----
datanodeAdditionalSlaves=n # Additional slave can be specified as follows: where you
# datanodeAdditionalSlaveSet=(dad1 dad2) # Each specifies set of slaves. This case, two set of slaves are
# configured
# dad1_Sync=n # All the slaves at "cad1" are connected with asynchronous mode.
# If not, specify "y"
# The following lines specifies detailed configuration for each
# slave tag, cad1. You can define cad2 similarly.
# dad1_Servers=(node08 node09 node06 node07) # Hosts
# dad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
# dad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)
# dad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
# dad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)
#---- WAL archives -------------------------------------------------------------------------------------------------
walArchive=n # If you'd like to configure WAL archive, edit this section.
# Pgxc_ctl assumes that if you configure WAL archive, you configure it
# for all the coordinators and datanodes.
# Default is "no". Please specify "y" here to turn it on.
#
# End of Configuration Section
#
#==========================================================================================================================
#========================================================================================================================
# The following is for extension. Just demonstrate how to write such extension. There's no code
# which takes care of them so please ignore the following lines. They are simply ignored by pgxc_ctl.
# No side effects.
#=============<< Beginning of future extension demonistration >> ========================================================
# You can setup more than one backup set for various purposes, such as disaster recovery.
##walArchiveSet=(war1 war2)
##war1_source=(master) # you can specify master, slave or ano other additional slaves as a source of WAL archive.
# Default is the master
##wal1_source=(slave)
##wal1_source=(additiona_coordinator_slave_set additional_datanode_slave_set)
##war1_host=node10 # All the nodes are backed up at the same host for a given archive set
##war1_backupdir=$HOME/pgxc/backup_war1
##wal2_source=(master)
##war2_host=node11
##war2_backupdir=$HOME/pgxc/backup_war2
#=============<< End of future extension demonistration >> ========================================================
5. Initializing with pgxc_ctl
On gtm_mast, run pgxc_ctl init all as the postgres user to initialize the cluster; the output is shown below:
# su - postgres
# pgxc_ctl init all
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
******** PGXC_CTL START ***************
Current directory: /home/postgres/pgxc_ctl
pgxc_ctl.conf 100% 17KB 17.3KB/s 00:00
Initialize GTM master
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
fixing permissions on existing directory /pgdata/gtm/data ... ok
creating configuration files ... ok
creating control file ... ok
Success.
waiting for server to shut down.... done
server stopped
Done.
Start GTM master
server starting
Initialize GTM slave
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
fixing permissions on existing directory /pgdata/gtm/data ... ok
creating configuration files ... ok
creating control file ... ok
Success.
Done.
Start GTM slaveserver starting
Done.
Initialize all the gtm proxies.
Initializing gtm proxy gtm_pxy01.
Initializing gtm proxy gtm_pxy02.
Initializing gtm proxy gtm_pxy03.
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
fixing permissions on existing directory /pgdata/gtm_pxy01/data ... ok
creating configuration files ... ok
Success.
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
fixing permissions on existing directory /pgdata/gtm_pxy02/data ... ok
creating configuration files ... ok
Success.
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.
fixing permissions on existing directory /pgdata/gtm_pxy03/data ... ok
creating configuration files ... ok
Success.
Done.
Starting all the gtm proxies.
Starting gtm proxy gtm_pxy01.
Starting gtm proxy gtm_pxy02.
Starting gtm proxy gtm_pxy03.
server starting
server starting
server starting
Done.
Initialize all the coordinator masters.
Initialize coordinator master coord01.
Initialize coordinator master coord02.
Initialize coordinator master coord03.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".
Data page checksums are disabled.
fixing permissions on existing directory /pgdata/coord01/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /pgdata/coord01/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
creating cluster information ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".
Data page checksums are disabled.
fixing permissions on existing directory /pgdata/coord02/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /pgdata/coord02/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
creating cluster information ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".
Data page checksums are disabled.
fixing permissions on existing directory /pgdata/coord03/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /pgdata/coord03/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
creating cluster information ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success.
Done.
Starting coordinator master.
Starting coordinator master coord01
Starting coordinator master coord02
Starting coordinator master coord03
LOG: redirecting log output to logging collector process
HINT: Future log output will appear in directory "pg_log".
LOG: redirecting log output to logging collector process
HINT: Future log output will appear in directory "pg_log".
LOG: redirecting log output to logging collector process
HINT: Future log output will appear in directory "pg_log".
Done.
Initialize all the datanode masters.
Initialize the datanode master datan01.
Initialize the datanode master datan02.
Initialize the datanode master datan03.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".
Data page checksums are disabled.
fixing permissions on existing directory /pgdata/datan01/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /pgdata/datan01/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
creating cluster information ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".
Data page checksums are disabled.
fixing permissions on existing directory /pgdata/datan02/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /pgdata/datan02/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
creating cluster information ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".
Data page checksums are disabled.
fixing permissions on existing directory /pgdata/datan03/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /pgdata/datan03/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
creating cluster information ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success.
Done.
Starting all the datanode masters.
Starting datanode master datan01.
Starting datanode master datan02.
Starting datanode master datan03.
LOG: redirecting log output to logging collector process
HINT: Future log output will appear in directory "pg_log".
LOG: redirecting log output to logging collector process
HINT: Future log output will appear in directory "pg_log".
LOG: redirecting log output to logging collector process
HINT: Future log output will appear in directory "pg_log".
Done.
ALTER NODE coord01 WITH (HOST='192.168.122.171', PORT=15432);
ALTER NODE
CREATE NODE coord02 WITH (TYPE='coordinator', HOST='192.168.122.172', PORT=15432);
CREATE NODE
CREATE NODE coord03 WITH (TYPE='coordinator', HOST='192.168.122.173', PORT=15432);
CREATE NODE
CREATE NODE datan01 WITH (TYPE='datanode', HOST='192.168.122.181', PORT=25431);
CREATE NODE
CREATE NODE datan02 WITH (TYPE='datanode', HOST='192.168.122.182', PORT=25432);
CREATE NODE
CREATE NODE datan03 WITH (TYPE='datanode', HOST='192.168.122.183', PORT=25433);
CREATE NODE
SELECT pgxc_pool_reload();
pgxc_pool_reload
------------------
t
(1 row)
CREATE NODE coord01 WITH (TYPE='coordinator', HOST='192.168.122.171', PORT=15432);
CREATE NODE
ALTER NODE coord02 WITH (HOST='192.168.122.172', PORT=15432);
ALTER NODE
CREATE NODE coord03 WITH (TYPE='coordinator', HOST='192.168.122.173', PORT=15432);
CREATE NODE
CREATE NODE datan01 WITH (TYPE='datanode', HOST='192.168.122.181', PORT=25431);
CREATE NODE
CREATE NODE datan02 WITH (TYPE='datanode', HOST='192.168.122.182', PORT=25432);
CREATE NODE
CREATE NODE datan03 WITH (TYPE='datanode', HOST='192.168.122.183', PORT=25433);
CREATE NODE
SELECT pgxc_pool_reload();
pgxc_pool_reload
------------------
t
(1 row)
CREATE NODE coord01 WITH (TYPE='coordinator', HOST='192.168.122.171', PORT=15432);
CREATE NODE
CREATE NODE coord02 WITH (TYPE='coordinator', HOST='192.168.122.172', PORT=15432);
CREATE NODE
ALTER NODE coord03 WITH (HOST='192.168.122.173', PORT=15432);
ALTER NODE
CREATE NODE datan01 WITH (TYPE='datanode', HOST='192.168.122.181', PORT=25431);
CREATE NODE
CREATE NODE datan02 WITH (TYPE='datanode', HOST='192.168.122.182', PORT=25432);
CREATE NODE
CREATE NODE datan03 WITH (TYPE='datanode', HOST='192.168.122.183', PORT=25433);
CREATE NODE
SELECT pgxc_pool_reload();
pgxc_pool_reload
------------------
t
(1 row)
Done.
EXECUTE DIRECT ON (datan01) 'CREATE NODE coord01 WITH (TYPE=''coordinator'', HOST=''192.168.122.171'', PORT=15432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan01) 'CREATE NODE coord02 WITH (TYPE=''coordinator'', HOST=''192.168.122.172'', PORT=15432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan01) 'CREATE NODE coord03 WITH (TYPE=''coordinator'', HOST=''192.168.122.173'', PORT=15432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan01) 'ALTER NODE datan01 WITH (TYPE=''datanode'', HOST=''192.168.122.181'', PORT=25431)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan01) 'CREATE NODE datan02 WITH (TYPE=''datanode'', HOST=''192.168.122.182'', PORT=25432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan01) 'CREATE NODE datan03 WITH (TYPE=''datanode'', HOST=''192.168.122.183'', PORT=25433)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan01) 'SELECT pgxc_pool_reload()';
pgxc_pool_reload
------------------
t
(1 row)
EXECUTE DIRECT ON (datan02) 'CREATE NODE coord01 WITH (TYPE=''coordinator'', HOST=''192.168.122.171'', PORT=15432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan02) 'CREATE NODE coord02 WITH (TYPE=''coordinator'', HOST=''192.168.122.172'', PORT=15432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan02) 'CREATE NODE coord03 WITH (TYPE=''coordinator'', HOST=''192.168.122.173'', PORT=15432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan02) 'CREATE NODE datan01 WITH (TYPE=''datanode'', HOST=''192.168.122.181'', PORT=25431)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan02) 'ALTER NODE datan02 WITH (TYPE=''datanode'', HOST=''192.168.122.182'', PORT=25432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan02) 'CREATE NODE datan03 WITH (TYPE=''datanode'', HOST=''192.168.122.183'', PORT=25433)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan02) 'SELECT pgxc_pool_reload()';
pgxc_pool_reload
------------------
t
(1 row)
EXECUTE DIRECT ON (datan03) 'CREATE NODE coord01 WITH (TYPE=''coordinator'', HOST=''192.168.122.171'', PORT=15432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan03) 'CREATE NODE coord02 WITH (TYPE=''coordinator'', HOST=''192.168.122.172'', PORT=15432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan03) 'CREATE NODE coord03 WITH (TYPE=''coordinator'', HOST=''192.168.122.173'', PORT=15432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan03) 'CREATE NODE datan01 WITH (TYPE=''datanode'', HOST=''192.168.122.181'', PORT=25431)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan03) 'CREATE NODE datan02 WITH (TYPE=''datanode'', HOST=''192.168.122.182'', PORT=25432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan03) 'ALTER NODE datan03 WITH (TYPE=''datanode'', HOST=''192.168.122.183'', PORT=25433)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datan03) 'SELECT pgxc_pool_reload()';
pgxc_pool_reload
------------------
t
(1 row)
Done.
Initialization is complete; you can check the status of every service with pgxc_ctl monitor all:
# pgxc_ctl monitor all
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
******** PGXC_CTL START ***************
Current directory: /home/postgres/pgxc_ctl
Running: gtm master
Running: gtm slave
Running: gtm proxy gtm_pxy01
Running: gtm proxy gtm_pxy02
Running: gtm proxy gtm_pxy03
Running: coordinator master coord01
Running: coordinator master coord02
Running: coordinator master coord03
Running: datanode master datan01
Running: datanode master datan02
Running: datanode master datan03
6. Changing the gtm address on the datanodes
So that a datanode can still register with a gtm proxy when it is later moved to another server, set the gtm address in each datanode's configuration file to the datanode's own service IP. Only datan01 is shown here:
# Run on gtm_mast: stop the datan01 datanode
pgxc_ctl stop datanode datan01
# Run on datan01: edit the configuration file and change the gtm proxy connection address
su - postgres
cd /pgdata/datan01/data/
vi postgresql.conf
tail -n 3 postgresql.conf
# The lines below show the result after editing; 192.168.122.181 is datan01's service IP, as listed in /etc/hosts
gtm_host = '192.168.122.181'
gtm_port = 20001
# End of Addition
# Run on gtm_mast
pgxc_ctl start datanode datan01
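Optionally verify that datan01 is running again and that the cluster still sees all nodes; the monitor subcommand accepts individual node names, and the pgxc_node catalog can be queried through any coordinator (host/port values follow the plan at the top of this article):
pgxc_ctl monitor datanode datan01
psql -h 192.168.122.171 -p 15432 -U postgres -c "SELECT node_name, node_type, node_host, node_port FROM pgxc_node;" postgres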